5 research outputs found

    Recent Progress in Wide-Area Surveillance: Protecting Our Pipeline Infrastructure

    The pipeline industry has millions of miles of pipe buried across the length and breadth of the country. Because the corridors through which pipelines run must be kept free of other activities, they need to be monitored to determine whether the pipeline's right-of-way (RoW) is being encroached upon at any point in time. Rapid advances in sensor technology have enabled the use of high-end video acquisition systems to monitor the RoW of pipelines. The images captured by aerial data acquisition systems are affected by a host of factors, including light sources, camera characteristics, geometric positions, and environmental conditions. We present a multistage framework for the analysis of aerial imagery for the automatic detection and identification of machinery threats along the pipeline RoW, one capable of accommodating the constraints that come with aerial imagery, such as low resolution, low frame rate, large variations in illumination, and motion blur. The proposed framework has three parts. In the first part, a method is developed to eliminate regions of the imagery that are not considered a threat to the pipeline; it feeds monogenic phase features into a cascade of pre-trained classifiers to discard unwanted regions. The second part is a part-based object detection model that searches for specific targets considered threat objects. The third part assesses the severity of threats to the pipeline by computing the geolocation and temperature of the threat objects. The proposed scheme is tested on real-world data captured along the pipeline RoW.
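
    The illumination-robust features in the first stage come from the monogenic signal, which can be computed with a Riesz transform in the frequency domain. The sketch below is a minimal illustration of that computation only, assuming a raw grayscale image; in practice the image would typically be band-pass filtered first (e.g. with log-Gabor filters), and the cascade of pre-trained classifiers is not shown.

```python
# Hedged sketch: monogenic local amplitude and phase via a frequency-domain
# Riesz transform. Band-pass pre-filtering and all downstream classifier
# stages are omitted; nothing here is the authors' actual code.
import numpy as np

def monogenic_features(img):
    """Return local amplitude and local phase of a 2-D grayscale image."""
    f = img.astype(float)
    rows, cols = f.shape
    u = np.fft.fftfreq(cols)[None, :]    # horizontal spatial frequencies
    v = np.fft.fftfreq(rows)[:, None]    # vertical spatial frequencies
    radius = np.hypot(u, v)
    radius[0, 0] = 1.0                   # avoid division by zero at DC
    F = np.fft.fft2(f)
    r1 = np.real(np.fft.ifft2(-1j * u / radius * F))  # first Riesz component
    r2 = np.real(np.fft.ifft2(-1j * v / radius * F))  # second Riesz component
    amplitude = np.sqrt(f**2 + r1**2 + r2**2)         # local amplitude
    phase = np.arctan2(np.hypot(r1, r2), f)           # local phase
    return amplitude, phase
```

    Because local phase encodes structure separately from local amplitude, which absorbs contrast and illumination, phase-derived features of this kind are a natural fit for imagery with large illumination variation.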

    Automatic Building Change Detection in Wide Area Surveillance

    We present an automated mechanism that detects and characterizes building changes by analyzing airborne or satellite imagery. The proposed framework comprises three stages: building detection, boundary extraction, and change identification. To detect buildings, we use the local phase and local amplitude of the monogenic signal to extract building features, addressing the issue of varying illumination. A support vector machine with a radial basis function kernel is then used for classification. In the boundary extraction stage, a level-set function with a self-organizing-map-based segmentation method is used to find building boundaries and compute the physical area of each building segment. In the last stage, changes to a detected building are identified by computing the area differences of the same building captured at different times. Experiments conducted on a set of real-life aerial imagery show the effectiveness of the proposed method.
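
    A minimal sketch of the classification stage follows, with synthetic stand-in feature vectors in place of the pooled monogenic features. The RBF-kernel SVM mirrors the abstract, but the data, hyperparameters, and the `area_changed` helper with its 5% tolerance are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch: RBF-kernel SVM for building/non-building classification,
# plus a toy area-difference test for the change-identification stage.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in data: each row is a feature vector pooled over one candidate
# window (synthetic here); label 1 = building, 0 = background.
X = rng.normal(size=(200, 16))
y = (X[:, :4].sum(axis=1) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))

def area_changed(area_t0, area_t1, tol=0.05):
    """Flag a change when the relative area difference exceeds tol (assumed)."""
    return abs(area_t1 - area_t0) / max(area_t0, 1e-9) > tol
```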

    Robust feature based reconstruction technique to remove rain from video

    In the context of extracting information from video, especially surveillance video, bad weather conditions can pose a huge challenge. They affect feature extraction and hence the performance of subsequent processing algorithms. In general, bad weather can be classified into static and dynamic conditions. Static weather conditions such as haze, fog, and smoke blur features and saturate intensities in the image; the temporal derivatives of the scene intensities are very low. Dynamic weather conditions such as rain and snow have effects that vary from frame to frame, so the temporal derivative of the scene intensity at any pixel will not be zero in the presence of rain. In essence, the actual scene content is not occluded by rain or snow at every instant of the video sequence. In this research, a new framework is presented for the robust reconstruction of videos affected by rain. The main challenge is to model the location of rain streaks in a frame, since the location of rain streaks at any particular instant is essentially random. However, the changes in scene intensity caused by rain streaks exhibit a generalized behavior, and the instants at which the actual scene is not occluded are sufficient to enable a robust reconstruction of the scene. The first part of the proposed framework is a novel technique for detecting rain streaks based on phase congruency features, which capture the structural edges that are conspicuous to the human visual system. The variation of these features from frame to frame is used to estimate the candidate rain pixels in a frame. To reduce the number of false candidates due to global motion, frames are registered using phase correlation; local motion is ignored in this part of the framework. The second part is a novel reconstruction technique that draws on three sources of information: the intensity of the rain-affected pixel itself, its spatial neighbors, and its temporal neighbors. An optimal estimate of the true intensity of the rain-affected pixel is obtained by minimizing the registration error between frames, using an optical flow technique based on local phase information. This part of the framework is designed so that local motion does not distort features in the reconstructed video. The proposed framework is evaluated quantitatively and qualitatively on a variety of videos of varying complexity. Its effectiveness is verified quantitatively by computing a no-reference image quality measure on individual frames of the reconstructed video. Experiments on the output videos show that the proposed technique outperforms state-of-the-art techniques. Its performance is also evaluated for removing snow from videos, where it is observed that the method can remove light snow streaks. As part of ongoing research, attempts are being made to make the algorithm run in real time.
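
    The global registration step can be realized with standard phase correlation, which recovers a translation from the normalized cross-power spectrum of two frames. The sketch below illustrates that step alone, under the assumption of a pure integer translation; sub-pixel refinement and the phase-congruency rain detector itself are not shown.

```python
# Hedged sketch: phase correlation between two frames. The peak of the
# inverse FFT of the normalized cross-power spectrum gives the shift.
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the integer (dy, dx) translation of frame_b w.r.t. frame_a."""
    Fa = np.fft.fft2(frame_a.astype(float))
    Fb = np.fft.fft2(frame_b.astype(float))
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12       # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))  # correlation surface; peak = shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > corr.shape[0] // 2:          # wrap large shifts to negative values
        dy -= corr.shape[0]
    if dx > corr.shape[1] // 2:
        dx -= corr.shape[1]
    return dy, dx
```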

    Automated Whale Blow Detection in Infrared Video

    In this chapter, solutions to the problem of whale blow detection in infrared video are presented. The solutions are intended as assistive technology to help whale researchers sift through hours or days of video without manual intervention. Video is captured from an elevated position along the shoreline using an infrared camera, and the presence of whales is inferred from blows detected in the video. Three solutions are proposed: the first uses a neural network (a multi-layer perceptron) for classification, the second uses fractal features, and the third uses convolutional neural networks. The central idea of all three algorithms is to model the spatio-temporal characteristics of a whale blow accurately using appropriate mathematical models. We provide a detailed description and analysis of the proposed solutions, the challenges involved, and some possible directions for future research.
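
    As a rough illustration of the first solution, the sketch below trains a multi-layer perceptron on flattened spatio-temporal cubes cut from the video. The cube size, network shape, and synthetic stand-in data are all assumptions for this sketch; the chapter's actual models and features are not reproduced here.

```python
# Hedged sketch: MLP classification of spatio-temporal cubes as blow / no-blow.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-in data: 300 cubes of 8 frames x 16x16 IR pixels, flattened;
# label 1 = blow, 0 = background (synthetic, for illustration only).
cubes = rng.normal(size=(300, 8 * 16 * 16))
labels = (cubes.mean(axis=1) > 0).astype(int)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
mlp.fit(cubes[:250], labels[:250])
print("held-out accuracy:", mlp.score(cubes[250:], labels[250:]))
```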

    Utilizing Local Phase Information to Remove Rain from Video

    In the context of extracting information from video, bad weather conditions such as rain can have a detrimental effect. In this paper, a novel framework to detect and remove rain streaks from video is proposed. The first part of the framework is a technique for detecting rain streaks based on phase congruency features; the variation of these features from frame to frame is used to estimate the candidate rain pixels in a frame. To reduce the number of false candidates due to global motion, frames are registered using phase correlation. The second part is a novel reconstruction technique that draws on three sources of information: the intensity of the rain-affected pixel itself, its spatial neighbors, and its temporal neighbors. An optimal estimate of the true intensity of the rain-affected pixel is obtained by minimizing the registration error between frames, using an optical flow technique based on local phase information. This part of the framework is designed so that local motion does not distort features in the reconstructed video. The proposed framework is evaluated quantitatively and qualitatively on a variety of videos of varying complexity. Its effectiveness is verified quantitatively by computing a no-reference image quality measure on individual frames of the reconstructed video. Experiments on the output videos show that the proposed technique outperforms state-of-the-art techniques.
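
    The three-source reconstruction idea can be illustrated with a simple fusion rule, shown below. The fixed weights and neighborhood radius are assumptions made for the sketch; the paper instead derives an optimal estimate by minimizing phase-based registration error between frames.

```python
# Hedged sketch: estimate the true intensity of a rain-flagged pixel from
# its own value, its spatial neighbors, and its temporal neighbors.
import numpy as np

def reconstruct_pixel(frames, t, y, x, radius=2):
    """frames: (T, H, W) registered grayscale video; (t, y, x) a rain pixel."""
    h, w = frames.shape[1:]
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    spatial = frames[t, y0:y1, x0:x1].mean()               # spatial neighbors
    temporal = [frames[s, y, x] for s in (t - 1, t + 1)
                if 0 <= s < len(frames)]
    temporal = np.mean(temporal) if temporal else spatial  # temporal neighbors
    current = frames[t, y, x]                              # rain-affected value
    # Fixed fusion weights, purely illustrative.
    return 0.5 * temporal + 0.3 * spatial + 0.2 * current
```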